Certified Convergent Perceptron Learning

Authors

  • Timothy Murphy
  • Patrick Gray
  • Gordon Stewart
Abstract

Frank Rosenblatt invented the Perceptron algorithm in 1957 as part of an early attempt to build “brain models” – artificial neural networks. In this paper, we apply tools from symbolic logic – dependent type theory as implemented in the interactive theorem prover Coq – to prove that one-layer perceptrons for binary classification converge when trained on linearly separable datasets (the Perceptron convergence theorem). We perform experiments to evaluate the performance of our Coq Perceptron vs. a C++ implementation and against a hybrid implementation in which separators learned in C++ are certified in Coq. We find that by carefully optimizing the extraction of our Coq perceptron, we can meet – and occasionally exceed – the performance of the C++ implementation. Our work is both proof engineering and intellectual archaeology: Even classic machine learning algorithms (and to a lesser degree, termination proofs) are understudied in the interactive theorem proving literature. At the same time, recasting Perceptron and its convergence proof in the language of 21st century human-assisted theorem provers may illuminate, for a fresh audience, a small but interesting corner of the history of ideas.
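For readers unfamiliar with the algorithm under verification, the classic mistake-driven Perceptron can be sketched as follows. This is an illustrative Python sketch, not the authors' Coq development or its extracted code; the function name and the `max_epochs` cap are ours. The convergence theorem guarantees that on linearly separable data the inner loop eventually makes a full pass with no mistakes.

```python
import numpy as np

def perceptron(X, y, max_epochs=1000):
    """Rosenblatt-style Perceptron for binary classification.
    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Returns (w, b) separating the data if it is linearly separable."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * (X[i] @ w + b) <= 0:  # misclassified (or on the boundary)
                w += y[i] * X[i]            # mistake-driven update
                b += y[i]
                mistakes += 1
        if mistakes == 0:                   # a clean pass: a separator was found
            return w, b
    raise RuntimeError("no separator found within max_epochs")
```

The termination argument formalized in the paper corresponds to bounding the number of updates this loop can make on separable data, which is what makes the Coq function provably total.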


Similar resources

A Provably Convergent Dynamic Training Method for Multi-layer Perceptron Networks

This paper presents a new method for training multi-layer perceptron networks called DMP1 (Dynamic Multi-layer Perceptron 1). The method is based upon a divide-and-conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The individual nodes of the network are trained using a genetic algorithm. The method is capable of handling real...


Error-driven Learning in Harmonic Grammar

The HG literature has so far adopted the Perceptron reweighting rule because of its convergence guarantees. Yet this rule is not suited to HG, as it fails to ensure non-negativity of the weights. The first contribution of this paper is a solution to this impasse. I consider a variant of the Perceptron which truncates any update at zero, thus maintaining the weights non-negative in a principle...
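The zero-truncated variant described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not code from the cited paper; the function name and learning rate are ours. The only change from the standard update is a clip that keeps every weight non-negative.

```python
import numpy as np

def truncated_perceptron_update(w, x, y, lr=1.0):
    """One mistake-driven update that clips each weight at zero,
    keeping the weight vector non-negative (as HG requires).
    w: current non-negative weights; x: features; y: label in {-1, +1}."""
    w_new = w + lr * y * x          # standard Perceptron update
    return np.maximum(w_new, 0.0)   # truncate negative entries at zero
```

For example, an update that would drive a weight below zero instead leaves it at zero, so the weight vector stays in the non-negative orthant after every step.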


Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using reg...


A Fast and Convergent Stochastic Learning Algorithm for MLP

We propose a stochastic learning algorithm for multilayer perceptrons of linear-threshold function units, which theoretically converges with probability one and experimentally (for the three-layer network case) exhibits a 100% convergence rate and remarkable speed on parity and simulated problems. On the parity problems (to realize the n-bit parity function by n (minimal) hidden units) the algorit...


Harmonic Grammar, Gradual Learning, and Phonological Gradience

(1) i. HG is (perhaps surprisingly) restrictive, due to inherent limitations on the types of languages that can be generated by an optimization system (Bhatt et al. 2007; Pater et al. 2007) ii. HG is compatible with a simple correctly convergent gradual learning algorithm, the Perceptron algorithm of Rosenblatt (1958) (Boersma and Pater 2007; Pater 2007a; see Jäger 2006, Soderstrom et al. 2006 ...




Publication date: 2016